The eigenvalue equation $Ax = \lambda x$ represents a rare geometric condition where a matrix transformation acts simply by scaling a vector rather than rotating it. These "exceptional" vectors $x$ define the principal axes of the linear transformation.
The Geometry of Exceptionality
For most vectors, $Ax$ points in a different direction than $x$. Eigenvectors are special because they stay on their own line through the origin: $Ax$ is just a scalar multiple of $x$. The eigenvalue $\lambda$ tells us the factor of this scaling:
- $|\lambda| > 1$: Growth (stretching).
- $|\lambda| < 1$: Decay (shrinking).
- $\lambda < 0$: Reversal (flipping direction).
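A diagonal matrix makes all three cases visible at once, since its eigenvectors are the standard basis vectors and its eigenvalues are the diagonal entries. The matrix below is an illustrative choice, not from the text above:

```python
import numpy as np

# Eigenvalues 2, 0.5, and -1 on the diagonal; eigenvectors are e1, e2, e3.
A = np.diag([2.0, 0.5, -1.0])

e1, e2, e3 = np.eye(3)  # rows of the identity: the standard basis

print(A @ e1)  # [2.  0.  0.]  stretched: lambda = 2 > 1
print(A @ e2)  # [0.  0.5 0.]  shrunk:    |lambda| = 0.5 < 1
print(A @ e3)  # [ 0.  0. -1.] flipped:   lambda = -1 < 0
```

Each product lands back on the same coordinate axis, merely rescaled, which is exactly the eigenvector condition.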
The equation $Ax = \lambda x$ can be rewritten as $(A - \lambda I)x = 0$. For a non-zero solution $x$ to exist, the matrix $(A - \lambda I)$ must be singular (non-invertible), meaning its determinant must be zero: $\det(A - \lambda I) = 0$.
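We can check both halves of this statement numerically: each eigenvalue makes $(A - \lambda I)$ singular, and the matching eigenvector solves $(A - \lambda I)x = 0$. The $2 \times 2$ matrix here is an arbitrary example chosen for illustration:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the roots of det(A - lambda*I) = 0
# and the corresponding eigenvectors as columns.
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, x in zip(eigenvalues, eigenvectors.T):
    # (A - lambda*I) is singular: its determinant vanishes.
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9
    # x is a nonzero solution of (A - lambda*I)x = 0, i.e. Ax = lambda*x.
    assert np.allclose(A @ x, lam * x)

print(sorted(eigenvalues))  # [2.0, 5.0]
```

For this matrix the characteristic polynomial is $\lambda^2 - 7\lambda + 10 = 0$, giving $\lambda = 5$ and $\lambda = 2$.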
If we shift a matrix $A$ by the identity, the eigenvectors are unchanged, but each eigenvalue shifts by 1:
$Ax = \lambda x \implies (A+I)x = Ax + Ix = \lambda x + x = (\lambda + 1)x$
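A quick numerical check of this shift property, reusing an arbitrary example matrix: the spectrum of $A + I$ is the spectrum of $A$ moved up by 1, and the eigenvectors of $A$ still work.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # eigenvalues 5 and 2

lam, X = np.linalg.eig(A)
lam_shifted = np.linalg.eigvals(A + np.eye(2))

# Eigenvalues shift by exactly 1 ...
assert np.allclose(sorted(lam_shifted), sorted(lam + 1))

# ... while each eigenvector of A is still an eigenvector of A + I.
for l, x in zip(lam, X.T):
    assert np.allclose((A + np.eye(2)) @ x, (l + 1) * x)
```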
From Projection to Reflection
Understanding the geometry of a projection $P$ allows us to derive the reflection $R$ through the linear operator $R = 2P - I$.
If $x$ is an eigenvector of $P$ with eigenvalue $\lambda$, then:
$Rx = (2P - I)x = 2(Px) - Ix = 2(\lambda x) - x = (2\lambda - 1)x$
This explains why a projection (eigenvalues 1 and 0) transforms into a reflection (eigenvalues 1 and -1): vectors in the subspace being projected onto are left fixed, while vectors perpendicular to it, which the projection annihilates, are flipped.
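The whole derivation can be verified concretely. Below, $P$ projects onto the line through $(1, 1)$ via the standard rank-one formula $P = aa^T / (a^Ta)$ (the choice of line is illustrative), and $R = 2P - I$ comes out as the reflection across that line:

```python
import numpy as np

# Projection onto the line spanned by a = (1, 1).
a = np.array([[1.0], [1.0]])
P = (a @ a.T) / (a.T @ a)   # eigenvalues: 1 (along a), 0 (perpendicular)

R = 2 * P - np.eye(2)       # reflection across the same line

# P has eigenvalues {0, 1}; R has {2*0 - 1, 2*1 - 1} = {-1, 1}.
assert np.allclose(sorted(np.linalg.eigvals(P)), [0.0, 1.0])
assert np.allclose(sorted(np.linalg.eigvals(R)), [-1.0, 1.0])

# Geometric sanity check: reflecting (1, 0) across the 45-degree line
# gives (0, 1), i.e. R swaps the coordinates.
assert np.allclose(R @ np.array([1.0, 0.0]), [0.0, 1.0])
```

Note that $R$ here works out to the coordinate-swap matrix $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, which is precisely reflection across the line $y = x$.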